Adaptive FISTA
Abstract
In this paper we propose an adaptively extrapolated proximal gradient method, based on the accelerated proximal gradient method (also known as FISTA), in which we locally optimize the extrapolation parameter by carrying out an exact (or inexact) line search. It turns out that in some situations the proposed algorithm is equivalent to a class of SR1 (identity minus rank 1) proximal quasi-Newton methods. Convergence is proved in a general non-convex setting, and hence, as a byproduct, we also obtain new convergence guarantees for proximal quasi-Newton methods. For convex problems, we can devise hybrid algorithms that enjoy the classical O(1/k²) convergence rate of accelerated proximal gradient methods. The efficiency of the new method is demonstrated on several classical optimization problems.
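To make the idea concrete, the following is a minimal sketch of an extrapolated proximal gradient step in which the extrapolation parameter is picked per iteration by an inexact line search, illustrated on a LASSO problem. The function names (adaptive_fista_lasso, soft_threshold) and the grid-based search over candidate parameters are illustrative assumptions; the paper's exact (or inexact) line-search criterion is not reproduced here.

```python
# Minimal sketch: proximal gradient with an adaptively chosen extrapolation
# parameter, on LASSO: min_x 0.5*||A x - b||^2 + lam*||x||_1.
# The parameter beta is chosen by a simple grid search (an inexact line search);
# this is an illustrative stand-in for the line search described in the paper.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def adaptive_fista_lasso(A, b, lam, n_iters=200, betas=np.linspace(0.0, 1.0, 21)):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of grad f
    step = 1.0 / L
    x_prev = x = np.zeros(A.shape[1])

    def F(z):                                  # composite objective value
        return 0.5 * np.sum((A @ z - b) ** 2) + lam * np.sum(np.abs(z))

    for _ in range(n_iters):
        best_x, best_val = None, np.inf
        for beta in betas:                     # inexact line search over beta
            y = x + beta * (x - x_prev)        # extrapolated point
            grad = A.T @ (A @ y - b)
            z = soft_threshold(y - step * grad, step * lam)
            val = F(z)
            if val < best_val:
                best_x, best_val = z, val
        x_prev, x = x, best_x
    return x
```

A usage example under these assumptions: with A = np.random.randn(100, 50) and b = np.random.randn(100), calling adaptive_fista_lasso(A, b, lam=0.1) returns an approximate LASSO solution. Fixing beta to the standard FISTA schedule instead of searching recovers the classical accelerated method.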
Similar References
Convergence Analysis of ISTA and FISTA for “Strongly + Semi” Convex Programming
The iterative shrinkage/thresholding algorithm (ISTA) and its faster version FISTA have been widely used in the literature. In this paper, we consider general versions of the ISTA and FISTA in the more general “strongly + semi” convex setting, i.e., minimizing the sum of a strongly convex function and a semiconvex function; and conduct convergence analysis for them. The consideration of a semic...
Local Linear Convergence of ISTA and FISTA on the LASSO Problem
We establish local linear convergence bounds for the ISTA and FISTA iterations on the model LASSO problem. We show that FISTA can be viewed as an accelerated ISTA process. Using a spectral analysis, we show that, when close enough to the solution, both iterations converge linearly, but FISTA slows down compared to ISTA, making it advantageous to switch to ISTA toward the end of the iteration pr...
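The switching strategy suggested by this analysis can be illustrated with a short sketch: run FISTA-style extrapolated steps first, then fall back to plain ISTA (no extrapolation) near the end. The fixed iteration-count switch used below is a simplifying assumption; the cited paper derives its conclusions from a spectral analysis rather than such a heuristic.

```python
# Minimal sketch: FISTA iterations followed by plain ISTA on LASSO,
# min_x 0.5*||A x - b||^2 + lam*||x||_1. The switch point is a heuristic
# placeholder, not the rule from the cited paper.
import numpy as np

def prox_l1(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_then_ista(A, b, lam, n_iters=500, switch_at=300):
    L = np.linalg.norm(A, 2) ** 2
    step = 1.0 / L
    x_prev = x = np.zeros(A.shape[1])
    t = 1.0
    for k in range(n_iters):
        if k < switch_at:                                  # FISTA phase
            t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x + ((t - 1.0) / t_next) * (x - x_prev)    # extrapolation
            t = t_next
        else:                                              # ISTA phase
            y = x                                          # no extrapolation
        grad = A.T @ (A @ y - b)
        x_prev, x = x, prox_l1(y - step * grad, step * lam)
    return x
```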
Avoiding Communication in Proximal Methods for Convex Optimization Problems
The fast iterative soft thresholding algorithm (FISTA) is used to solve convex regularized optimization problems in machine learning. Distributed implementations of the algorithm have become popular since they enable the analysis of large datasets. However, existing formulations of FISTA communicate data at every iteration which reduces its performance on modern distributed architectures. The c...
Supplement Materials for "An Improved GLMNET for L1-regularized Logistic Regression"
This document presents some materials not included in the paper. In Section II, we show that the solution of subproblem (13) converges to zero. In Section III, we show that newGLMNET has quadratic convergence if the loss function L(·) is strictly convex and the exact Hessian is used as H in the quadratic sub-problem. In Section IV, we show that newGLMNET terminates in finite iterations even wit...
CT Image Reconstruction from Sparse Projections Using Adaptive TpV Regularization
Radiation dose reduction without losing CT image quality has been an increasing concern. Reducing the number of X-ray projections to reconstruct CT images, which is also called sparse-projection reconstruction, can potentially avoid excessive dose delivered to patients in CT examination. To overcome the disadvantages of total variation (TV) minimization method, in this work we introduce a novel...
Publication date: 2017